18 research outputs found

    Empirical Evaluation of Contextual Policy Search with a Comparison-based Surrogate Model and Active Covariance Matrix Adaptation

    Contextual policy search (CPS) is a class of multi-task reinforcement learning algorithms that is particularly useful for robotic applications. A recent state-of-the-art method is Contextual Covariance Matrix Adaptation Evolution Strategies (C-CMA-ES), which builds on the standard black-box optimization algorithm CMA-ES. There are two useful extensions of CMA-ES that we transfer to C-CMA-ES and evaluate empirically: ACM-ES, which uses a comparison-based surrogate model, and aCMA-ES, which uses an active update of the covariance matrix. We show that the improvements these methods bring in sample efficiency can be impressive, although this is no longer relevant for the robotic domain.
    Comment: Supplementary material for poster paper accepted at GECCO 2019; https://doi.org/10.1145/3319619.332193
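The contextual policy search setting described above can be sketched in a few lines of stdlib Python. This is a deliberately simplified rank-based search with a linear context-to-policy mapping, not C-CMA-ES itself (no covariance adaptation or surrogate model); the toy reward and all names are illustrative assumptions.

```python
import random

def rollout(theta, context):
    # Toy task: reward is highest when the policy parameters match the context.
    return -sum((t - c) ** 2 for t, c in zip(theta, context))

def contextual_search(contexts, dim=2, popsize=20, iters=60, sigma=1.0, lr=0.05):
    # W maps a context to the mean of the Gaussian search distribution.
    W = [[0.0] * dim for _ in range(dim)]
    for _ in range(iters):
        samples = []
        for _ in range(popsize):
            c = random.choice(contexts)
            mean = [sum(W[i][j] * c[j] for j in range(dim)) for i in range(dim)]
            theta = [m + random.gauss(0.0, sigma) for m in mean]
            samples.append((rollout(theta, c), c, theta))
        # Rank-based selection: keep the better half of the rollouts.
        samples.sort(key=lambda s: s[0], reverse=True)
        for _, c, theta in samples[: popsize // 2]:
            # Move W toward reproducing the elite parameters for this context.
            pred = [sum(W[i][k] * c[k] for k in range(dim)) for i in range(dim)]
            for i in range(dim):
                for j in range(dim):
                    W[i][j] += lr * (theta[i] - pred[i]) * c[j]
        sigma *= 0.97  # shrink exploration over time
    return W
```

On the toy reward, the optimal mapping is the identity, so the learned W should approach it as exploration noise shrinks.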

    Common Data Fusion Framework: An open-source Common Data Fusion Framework for space robotics

    Multisensor data fusion plays a vital role in providing autonomous systems with the environmental information crucial for reliable functioning. In this article, we summarize the modular structure of the newly developed and released Common Data Fusion Framework and explain how it is used. Sensor data are registered and fused within the Common Data Fusion Framework to produce comprehensive 3D environment representations and pose estimations. The software components proposed to model this process in a reusable manner are presented through a complete overview of the framework, the provided data fusion algorithms are listed, and the approach is exemplified with the case of 3D reconstruction from 2D images. The Common Data Fusion Framework has been deployed and tested in various scenarios, including robots performing planetary rover exploration and tracking of orbiting satellites.
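The core operation of combining redundant sensor estimates can be illustrated with a minimal sketch (not Common Data Fusion Framework code; the function name is an assumption): inverse-variance weighting of two independent measurements, the scalar building block of Kalman-style fusion.

```python
def fuse(est_a, var_a, est_b, var_b):
    # Combine two independent estimates by inverse-variance weighting:
    # more certain measurements (smaller variance) get more weight.
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is tighter than either input
    return fused, fused_var
```

For example, fusing a range reading of 1.0 m (variance 1.0) with one of 3.0 m (variance 1.0) yields 2.0 m with variance 0.5.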

    Data fusion framework for planetary and orbital robotics applications

    In space robotics, a wide range of sensor data fusion methods is required to accomplish challenging objectives for exploration, science, and commercial purposes. This includes navigation for planetary robotics, guidance for orbital robotics, scientific prospecting, and on-orbit servicing. InFuse provides a comprehensive data fusion framework, or toolset, to fuse and interpret sensor data from multiple sensors. This project represents an optimal approach to developing software for robotics: a standardized and comprehensive development environment for industrial applications, with a particular focus on space applications, in which components can be connected, tested offline, evaluated, and deployed in any preferred robotic framework, including those devised for space or terrestrial applications. This paper discusses the results of verification and validation of data fusion methods for robots deployed in orbital and planetary scenarios, using data sets collected in simulation and in outdoor analogue campaigns.

    Learning and generalizing behaviors for robots from human demonstration

    Behavior learning is a promising alternative to planning and control for behavior generation in robotics. The field is becoming more and more popular in applications where modeling the environment and the robot is cumbersome, difficult, or even impossible. Learning behaviors for real robots that generalize over task parameters with as few interactions with the environment as possible is the challenge that this dissertation tackles. Which problems we can currently solve with behavior learning algorithms, and which algorithms the domain of robotics still needs, is not apparent at the moment, as there are many related fields: imitation learning, reinforcement learning, self-supervised learning, and black-box optimization. After an extensive literature review, we decide to use methods from imitation learning and policy search to address the challenge. Specifically, we use human demonstrations recorded by motion capture systems and imitation learning with movement primitives to obtain initial behaviors that we later generalize through contextual policy search. Imitation from motion capture data leads to the correspondence problem: the kinematic and dynamic capabilities of humans and robots are often fundamentally different and, hence, we have to compensate for that. This thesis proposes a procedure for automatic embodiment mapping through optimization and policy search and evaluates it with several robotic systems. Contextual policy search algorithms are often not sample-efficient enough to learn directly on real robots. This thesis addresses the issue with active context selection, active training set selection, surrogate models, and manifold learning. The progress is illustrated with several simulated and real robot learning tasks. Strong connections between policy search and black-box optimization are revealed and exploited in this part of the thesis.
This thesis demonstrates that learning manipulation behaviors is possible within a few hundred episodes directly on a real robot. Furthermore, these new approaches to imitation learning and contextual policy search are integrated into a coherent framework that can be used to learn new behaviors from human motion capture data almost automatically. The corresponding implementations developed during this thesis are available as open-source software.
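The movement primitives used for imitation learning in this line of work can be illustrated with a minimal sketch: a one-dimensional dynamic movement primitive, i.e. a critically damped spring-damper system driven toward the goal plus a learned forcing term weighted by Gaussian basis functions over a decaying phase variable. The gains, basis placement, Euler integration, and function name are illustrative assumptions, not the thesis implementation.

```python
import math

def dmp_rollout(start, goal, weights, dt=0.01, tau=1.0, alpha=25.0):
    # 1-D dynamic movement primitive: critically damped spring-damper
    # toward the goal, perturbed by a learned forcing term.
    beta = alpha / 4.0           # critical damping
    alpha_x = 3.0                # decay rate of the canonical system
    y, yd, x = start, 0.0, 1.0   # position, velocity, phase
    n = len(weights)
    # Gaussian basis functions spread over the phase variable x.
    centers = [math.exp(-alpha_x * i / (n - 1)) for i in range(n)]
    widths = [1.0 / (0.5 * (centers[i] - centers[i + 1]) ** 2) for i in range(n - 1)]
    widths.append(widths[-1])    # reuse the last width for the final basis
    trajectory = []
    for _ in range(int(tau / dt)):
        psi = [math.exp(-w * (x - c) ** 2) for w, c in zip(widths, centers)]
        forcing = (sum(w * p for w, p in zip(weights, psi))
                   / (sum(psi) + 1e-10)) * x * (goal - start)
        ydd = alpha * (beta * (goal - y) - yd) + forcing
        yd += ydd * dt / tau
        y += yd * dt / tau
        x += -alpha_x * x * dt / tau
        trajectory.append(y)
    return trajectory
```

With all weights at zero the primitive reduces to a smooth point-to-point motion; fitting the weights to a demonstration reshapes the trajectory while keeping guaranteed convergence to the goal.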

    Throwing trajectories

    Throwing trajectories recorded with a motion capture system.

    A Modular Approach to the Embodiment of Hand Motions from Human Demonstrations

    Manipulating objects with robotic hands is a complicated task. Not only the fingers of the hand but also the pose of the robot's end effector need to be coordinated. Using human demonstrations of movements is an intuitive and data-efficient way of guiding the robot's behavior. We propose a modular framework with an automatic embodiment mapping to transfer recorded human hand motions to robotic systems. In this work, we use motion capture to record human motion. We evaluate our approach on eight challenging tasks, in which a robotic hand needs to grasp and manipulate either deformable or small and fragile objects. We test a subset of trajectories in simulation and on a real robot, and the overall success rates are aligned.
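One ingredient of such an embodiment mapping, rescaling a recorded human trajectory axis into the robot's reachable workspace, can be sketched as a per-axis least-squares fit of a scale and offset. This is an illustrative simplification, not the paper's optimization procedure, and the function name is assumed.

```python
def fit_axis_map(human, robot):
    # Least-squares scale and offset mapping one axis of human motion
    # into the robot's workspace: robot ~ a * human + b.
    n = len(human)
    mh = sum(human) / n
    mr = sum(robot) / n
    cov = sum((h - mh) * (r - mr) for h, r in zip(human, robot))
    var = sum((h - mh) ** 2 for h in human)
    a = cov / var
    b = mr - a * mh
    return a, b
```

Given a few corresponding keyframes per axis, the fitted (a, b) maps every remaining sample of the human trajectory into the robot's frame.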